
    Output Feedback Control for Couple-Group Consensus of Multiagent Systems

    This paper studies the couple-group consensus problem for multiagent systems via output feedback control. Both continuous- and discrete-time cases are considered. The consensus problems are converted into stability problems for the error systems through a system transformation. For each case, we obtain two necessary and sufficient conditions for couple-group consensus in different forms. Two different algorithms are used to design the control gains for the continuous- and discrete-time cases, respectively. Finally, simulation examples are given to show the effectiveness of the proposed results.
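    The reduction the abstract describes can be sketched for single-integrator agents under static output feedback (a generic illustration only; the paper's exact dynamics, outputs, and gain design are not given in the abstract):

    ```latex
    % Agent dynamics and output-feedback protocol (continuous-time case):
    \dot{x}_i = u_i, \qquad y_i = C x_i, \qquad
    u_i = K \sum_{j \in \mathcal{N}_i} a_{ij}\,(y_j - y_i)
    % Per-group disagreement: for agent i in group g with reference agent r_g,
    e_i = x_i - x_{r_g}
    % Couple-group consensus holds iff the error system \dot{e} = \bar{A}(K)\, e
    % is asymptotically stable, i.e. e(t) \to 0 as t \to \infty.
    ```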

    Learning Discriminative Representations for Skeleton Based Action Recognition

    Human action recognition aims at classifying the category of human action from a segment of a video. Recently, much work has focused on designing GCN-based models to extract features from skeletons for this task, because skeleton representations are much more efficient and robust than other modalities such as RGB frames. However, when employing skeleton data, some important cues, such as related objects, are discarded. This results in ambiguous actions that are hard to distinguish and tend to be misclassified. To alleviate this problem, we propose an auxiliary feature refinement head (FR Head), which consists of spatial-temporal decoupling and contrastive feature refinement, to obtain discriminative representations of skeletons. Ambiguous samples are dynamically discovered and calibrated in the feature space. Furthermore, FR Head can be imposed on different stages of GCNs to build a multi-level refinement for stronger supervision. Extensive experiments are conducted on the NTU RGB+D, NTU RGB+D 120, and NW-UCLA datasets. Our proposed models obtain results competitive with state-of-the-art methods and can help to discriminate ambiguous samples. Code is available at https://github.com/zhysora/FR-Head.
    Comment: Accepted by CVPR 2023. 10 pages, 5 figures, 5 tables.
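    The "contrastive feature refinement" the abstract mentions is, generically, a contrastive objective that pulls an ambiguous sample's feature toward its class and away from confusable classes. A minimal InfoNCE-style sketch of that mechanism (function names and the dependency-free implementation are assumptions, not the paper's code):

    ```python
    import math

    def cosine(u, v):
        # Cosine similarity between two feature vectors.
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv)

    def info_nce(anchor, positive, negatives, temperature=0.1):
        """InfoNCE loss: low when the anchor is closer to the positive
        (same class) than to every negative (confusable classes)."""
        logits = [cosine(anchor, positive) / temperature]
        logits += [cosine(anchor, n) / temperature for n in negatives]
        m = max(logits)  # subtract max for numerical stability
        exp = [math.exp(l - m) for l in logits]
        return -math.log(exp[0] / sum(exp))
    ```

    Minimizing this loss over calibrated ambiguous samples sharpens the class boundaries in feature space, which is the effect the FR Head is designed to achieve.
    
    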

    4DRVO-Net: Deep 4D Radar-Visual Odometry Using Multi-Modal and Multi-Scale Adaptive Fusion

    Four-dimensional (4D) radar-visual odometry (4DRVO) integrates complementary information from 4D radar and cameras, making it an attractive solution for accurate and robust pose estimation. However, 4DRVO may exhibit significant tracking errors owing to three main factors: 1) the sparsity of 4D radar point clouds; 2) inaccurate data association and insufficient feature interaction between the 4D radar and the camera; and 3) disturbances from dynamic objects in the environment, which affect odometry estimation. In this paper, we present 4DRVO-Net, a 4D radar-visual odometry method that leverages the feature pyramid, pose warping, and cost volume (PWC) network architecture to progressively estimate and refine poses. Specifically, we propose a multi-scale feature extraction network, Radar-PointNet++, that fully exploits rich 4D radar point information, enabling fine-grained learning on sparse 4D radar point clouds. To effectively integrate the two modalities, we design an adaptive 4D radar-camera fusion module (A-RCFM) that automatically selects image features based on 4D radar point features, facilitating multi-scale cross-modal feature interaction and adaptive multi-modal feature fusion. In addition, we introduce a velocity-guided point-confidence estimation module to measure local motion patterns, reduce the influence of dynamic objects and outliers, and provide continuous updates during pose refinement. We demonstrate the excellent performance of our method and the effectiveness of each module on both the VoD and an in-house dataset. Our method outperforms all learning-based and geometry-based methods on most sequences of the VoD dataset. Furthermore, it exhibits promising performance that closely approaches the 64-line LiDAR odometry results of A-LOAM without mapping optimization.
    Comment: 14 pages, 12 figures.
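    The velocity-guided point-confidence idea can be illustrated with 4D radar's distinctive measurement, per-point radial (Doppler) velocity: points whose measured Doppler disagrees with the Doppler predicted from the current ego-motion estimate are likely dynamic objects or outliers and should be down-weighted. The paper's module is learned; the Gaussian weighting and all names below are assumptions for illustration:

    ```python
    import math

    def point_confidence(measured_doppler, predicted_doppler, sigma=0.5):
        """Per-point confidence in [0, 1]: close to 1 when the measured
        radial velocity matches the ego-motion prediction (static scene),
        close to 0 for dynamic objects and outliers (sigma in m/s)."""
        return [math.exp(-(v - p) ** 2 / (2.0 * sigma ** 2))
                for v, p in zip(measured_doppler, predicted_doppler)]
    ```

    These weights could then scale each point's contribution to the pose-refinement cost, so residuals from moving objects barely perturb the estimate.
    
    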

    Breaking the amyotrophic lateral sclerosis early diagnostic barrier: the promise of general markers

    Amyotrophic lateral sclerosis (ALS) is a severe neurodegenerative disease associated with the selective and progressive loss of motor neurons. Its symptoms include muscle cramps and weakness, and the disease eventually leads to death. General markers for early diagnosis could help ALS patients receive early intervention and prolong their survival. Recently, several novel approaches, as well as previously suggested methods, have shown potential for the early diagnosis of ALS. The purpose of this review is to summarize the current status of general-marker discovery and development for the early diagnosis of ALS, including genes, proteins, neuroimaging, neurophysiology, neuroultrasound, and machine learning models. The main genetic markers evaluated are the superoxide dismutase 1 (SOD1), chromosome 9 open reading frame 72 (C9orf72), transactivation-responsive DNA binding protein 43 (TARDBP), and fused in sarcoma (FUS) genes. Among proteins, neurofilament light chain remains the most established disease-specific marker in ALS. The expression of chitinases, glial fibrillary acidic protein (GFAP), and inflammatory factors changes in the early stage of ALS. In addition, more patient-friendly and accessible assays are being explored through developments in neuroimaging, neurophysiology, and neuroultrasound techniques. These novel disease-specific changes show promising potential for the early diagnosis of ALS. However, all of these general markers still have limitations in early diagnosis; therefore, there is an urgent need for the validation and development of new disease-specific markers for ALS.

    Retina-Inspired Carbon Nitride-Based Photonic Synapses for Selective Detection of UV Light

    Photonic synapses combine sensing and processing in a single device, making them promising candidates for emulating the visual perception of a biological retina. However, photonic synapses with wavelength selectivity, a key property for visual perception, have not been developed so far. Herein, organic photonic synapses that selectively detect UV rays and process various optical stimuli are presented. The photonic synapses use carbon nitride (C3N4) as a UV-responsive floating-gate layer in a transistor geometry. C3N4 nanodots dominantly absorb UV light; this trait is the basis of the UV selectivity of these photonic synapses. The presented devices consume only 18.06 fJ per synaptic event, which is comparable to the energy consumption of biological synapses. Furthermore, in situ modulation of exposure to UV light is demonstrated by integrating the devices with UV-transmittance modulators. These smart systems can be further developed to combine detection and dose calculation to determine how and when to decrease UV transmittance for preventive health care.

    CRYSTALpytools: A Python infrastructure for the Crystal code

    CRYSTALpytools is an open-source Python project, available on GitHub, that implements a user-friendly interface to the Crystal code for quantum-mechanical condensed-matter simulations. CRYSTALpytools provides functionalities to: i) write and read Crystal input and output files for a range of calculations (single-point, electronic structure, geometry optimization, harmonic and quasi-harmonic lattice dynamics, elastic tensor evaluation, topological analysis of the electron density, electron transport, and others); ii) extract relevant information; iii) create workflows; iv) post-process computed quantities; and v) plot results in a variety of styles for rapid and precise visual analysis. Furthermore, CRYSTALpytools allows the user to translate Crystal objects (the central data structure of the project) to and from the Structure and Atoms objects of the pymatgen and ASE libraries, respectively. These tools can be used to create, manipulate, and visualise complicated structures and write them efficiently to Crystal input files. Jupyter Notebooks have also been developed to guide less Python-savvy users through CRYSTALpytools via a user-friendly graphical interface with predefined workflows for different specific tasks.
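    The pymatgen/ASE interop the abstract mentions can be sketched with the two libraries' own conversion utilities; CRYSTALpytools' Crystal-object API itself is not shown, since its exact names are not given in the abstract:

    ```python
    # Sketch of the Structure <-> Atoms translation layer CRYSTALpytools
    # builds on (pymatgen and ASE public APIs; the structure chosen here
    # is an arbitrary example).
    from ase import Atoms
    from pymatgen.io.ase import AseAtomsAdaptor

    # Build a simple two-atom NaCl cell as an ASE Atoms object.
    atoms = Atoms("NaCl",
                  positions=[(0.0, 0.0, 0.0), (2.82, 0.0, 0.0)],
                  cell=[5.64, 5.64, 5.64],
                  pbc=True)

    # Convert to a pymatgen Structure and back again; CRYSTALpytools
    # translates its Crystal objects through these same two libraries.
    structure = AseAtomsAdaptor.get_structure(atoms)
    roundtrip = AseAtomsAdaptor.get_atoms(structure)
    ```

    Either representation can then be manipulated with its library's tooling before being written out as a Crystal input file.
    
    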